Sublinear-Time Optimization for High-Dimensional Learning
Authors
Abstract
Across domains, both the scale of data and the complexity of machine learning models have grown substantially in recent years. For many large-scale models of interest, tractable estimation without access to extensive computational infrastructure remains an open problem. In this thesis, we approach the tractability of large-scale estimation problems by leveraging sparse structure inherent in the learning objective, which allows us to develop algorithms sublinear in the size of the domain that greedily search for the atoms comprising the optimal structure. We address three questions for each model of interest: (a) how to formulate model estimation as a high-dimensional optimization problem with tractable sparse structure; (b) how to search for the optimal structure efficiently, i.e., in sublinear time; and (c) how to guarantee fast convergence of the resulting optimization algorithm. By answering these questions, we develop state-of-the-art learning algorithms for varied domains such as extreme classification and structured prediction with large output domains, and we design new estimators for latent-variable models that enjoy polynomial computational and sample complexities without restrictive assumptions.
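The greedy atom-search idea above can be illustrated with a toy matching-pursuit sketch: repeatedly add the atom most correlated with the current residual, then refit on the selected support. This is a generic illustration of greedy sparse selection, not the thesis's specific algorithm; the function name and parameters are hypothetical, and the full scan over atoms is what a sublinear method would replace with a fast search structure.

```python
import numpy as np

def greedy_sparse_fit(A, y, k):
    """Toy greedy atom selection (matching-pursuit style): pick the column
    of A most correlated with the residual, then do a fully corrective
    least-squares refit on the chosen support."""
    n_atoms = A.shape[1]
    support = []
    residual = y.copy()
    for _ in range(k):
        # Score every atom against the residual. A sublinear variant would
        # replace this O(n_atoms) scan with e.g. a maximum-inner-product
        # search index over the atoms.
        scores = np.abs(A.T @ residual)
        scores[support] = -np.inf          # never pick an atom twice
        support.append(int(np.argmax(scores)))
        # Fully corrective step: refit coefficients on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n_atoms)
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[3, 17, 90]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = greedy_sparse_fit(A, y, k=3)
```

On this noiseless 3-sparse problem the greedy loop recovers a small support that fits the observations exactly.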
Similar resources
Beating SGD: Learning SVMs in Sublinear Time
We present an optimization approach for linear SVMs based on a stochastic primal-dual approach, where the primal step is akin to an importance-weighted SGD step, and the dual step is a stochastic update on the importance weights. This yields an optimization method with a sublinear dependence on the training set size, and the first method for learning linear SVMs with runtime less the...
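A simplified sketch in the spirit of the description above: the primal step is an importance-weighted SGD update on a sampled example, and the dual step multiplicatively reweights examples by their hinge loss. The step sizes and update forms here are illustrative choices, not the paper's exact algorithm, and the exact per-step loss computation below is O(n) for clarity where the sublinear method would estimate it by sampling.

```python
import numpy as np

def primal_dual_svm(X, y, T=1500, eta=0.5, gamma=0.01, seed=0):
    """Illustrative stochastic primal-dual SVM sketch (not the paper's
    exact method): sample examples by importance weights p, take a hinge
    subgradient step on the primal w, multiplicatively upweight examples
    with large hinge loss, and return the averaged primal iterate."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    p = np.full(n, 1.0 / n)               # dual importance weights
    w_avg = np.zeros(d)
    for _ in range(T):
        i = rng.choice(n, p=p)            # primal step on an importance-sampled example
        if y[i] * (X[i] @ w) < 1:         # hinge-loss subgradient
            w = w + eta * y[i] * X[i]
        w = w / max(1.0, np.linalg.norm(w))   # project onto the unit ball
        # Dual step: upweight examples that currently suffer hinge loss.
        # (Computed exactly here; the sublinear method estimates it.)
        losses = np.maximum(0.0, 1.0 - y * (X @ w))
        p = p * np.exp(gamma * losses)
        p = p / p.sum()
        w_avg += w
    return w_avg / T

# Toy separable data: two Gaussian blobs with labels +1 / -1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2.0, 1.0, (50, 2)), rng.normal(-2.0, 1.0, (50, 2))])
y = np.concatenate([np.ones(50), -np.ones(50)])
w = primal_dual_svm(X, y)
acc = float(np.mean(np.sign(X @ w) == y))
```

On well-separated blobs the averaged iterate separates the classes almost perfectly.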
Approximating Semidefinite Programs in Sublinear Time
In recent years semidefinite optimization has become a tool of major importance in various optimization and machine learning problems. In many of these problems the amount of data in practice is so large that there is a constant need for faster algorithms. In this work we present the first sublinear time approximation algorithm for semidefinite programs which we believe may be useful for such p...
Optimization of Dimensional Deviations in Wax Patterns for Investment Casting
Investment casting is a versatile manufacturing process to produce high quality parts with high dimensional accuracy. The process begins with the manufacture of wax patterns. The dimensional accuracy of the model affects the quality of the finished part. The present study investigated the control and optimization of dimensional deviations in wax patterns. A mold for an H-shaped wax pattern was ...
A Non-generative Framework and Convex Relaxations for Unsupervised Learning
We will describe a novel theoretical framework for unsupervised learning which is not based on generative assumptions. It is comparative, and allows one to avoid known computational hardness results and improper algorithms based on convex relaxations. We show how several families of unsupervised learning models, which were previously only analyzed under probabilistic assumptions and are otherwise p...
Enhanced Comprehensive Learning Cooperative Particle Swarm Optimization with Fuzzy Inertia Weight (ECLCFPSO-IW)
So far, various optimization methods have been presented; among the most popular are algorithms based on swarm intelligence, and one of the most successful of these is Particle Swarm Optimization (PSO). Some prior efforts have applied fuzzy logic to remedy defects of PSO such as trapping in local optima and premature convergence. Moreover, to overcome the problem of i...
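For context, a minimal standard PSO with a linearly decreasing inertia weight can be sketched as follows; the ECLCFPSO-IW variant above replaces this fixed inertia schedule with a fuzzy controller, which is not reproduced here. All parameter values are conventional illustrative choices.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal standard PSO: velocities blend inertia, attraction to each
    particle's personal best, and attraction to the global best. The
    inertia weight decreases linearly from 0.9 to 0.4 over the run."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([f(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    c1 = c2 = 1.5                          # cognitive / social coefficients
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters          # linearly decreasing inertia
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val        # update personal bests
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

# Minimize the 5-dimensional sphere function.
best, val = pso_minimize(lambda x: float(np.sum(x ** 2)), dim=5)
```

On the sphere function this converges close to the origin well within 200 iterations.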
Journal:
Volume/Issue:
Pages: -
Publication date: 2017